Artifact Reduction


ReMAR-DS: Recalibrated Feature Learning for Metal Artifact Reduction and CT Domain Transformation

Rehman, Mubashara, Martinel, Niki, Avanzo, Michele, Spizzo, Riccardo, Micheloni, Christian

arXiv.org Artificial Intelligence

Artifacts in kilo-Voltage CT (kVCT) imaging degrade image quality, impacting clinical decisions. We propose a deep learning framework for metal artifact reduction (MAR) and domain transformation from kVCT to Mega-Voltage CT (MVCT). The proposed framework, ReMAR-DS, utilizes an encoder-decoder architecture with enhanced feature recalibration, effectively reducing artifacts while preserving anatomical structures. This ensures that only relevant information is utilized in the reconstruction process. By infusing recalibrated features from the encoder block, the model focuses on relevant spatial regions (e.g., areas with artifacts) and highlights key features across channels (e.g., anatomical structures), leading to improved reconstruction of artifact-corrupted regions. Unlike traditional MAR methods, our approach bridges the gap between high-resolution kVCT and artifact-resistant MVCT, enhancing radiotherapy planning. It produces high-quality MVCT-like reconstructions, validated through qualitative and quantitative evaluations. Clinically, this enables oncologists to rely on kVCT alone, reducing repeated high-dose MVCT scans and lowering radiation exposure for cancer patients.
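The feature-recalibration idea described above is in the spirit of squeeze-and-excitation gating: pool each channel to a descriptor, squash it to a gate in (0, 1), and rescale the channel so informative channels are emphasized. A minimal dependency-free sketch follows; it is a toy illustration with no learned weights and no spatial branch, not the paper's actual module:

```python
import math

def recalibrate_channels(feature_map):
    """Toy channel recalibration: each channel is rescaled by a sigmoid
    gate derived from its global average. `feature_map` is a list of
    channels, each a 2-D list of floats. In a trained network the gate
    would come from learned fully connected layers, not the raw mean."""
    recalibrated = []
    for channel in feature_map:
        # "Squeeze": global average pooling over the spatial dimensions.
        flat = [v for row in channel for v in row]
        mean = sum(flat) / len(flat)
        # "Excite": sigmoid gate in (0, 1) from the pooled descriptor.
        gate = 1.0 / (1.0 + math.exp(-mean))
        # Rescale the whole channel by its gate.
        recalibrated.append([[v * gate for v in row] for row in channel])
    return recalibrated
```

Channels with large positive activations receive gates near 1 and pass almost unchanged, while weakly activated channels are attenuated.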


MAR-DTN: Metal Artifact Reduction using Domain Transformation Network for Radiotherapy Planning

Serrano-Antón, Belén, Rehman, Mubashara, Martinel, Niki, Avanzo, Michele, Spizzo, Riccardo, Fanetti, Giuseppe, Muñuzuri, Alberto P., Micheloni, Christian

arXiv.org Artificial Intelligence

For the planning of radiotherapy treatments for head and neck cancers, Computed Tomography (CT) scans of the patients are typically employed. However, in patients with head and neck cancer, the quality of standard CT scans generated using kilo-Voltage (kVCT) tube potentials is severely degraded by streak artifacts occurring in the presence of metallic implants such as dental fillings. Some radiotherapy devices can also acquire Mega-Voltage CT (MVCT) scans for daily patient setup verification; owing to the higher energy of the X-rays used, MVCT scans are almost entirely free from artifacts, making them more suitable for radiotherapy treatment planning. In this study, we combine the advantages of kVCT scans with those of artifact-free MVCT scans. We propose a deep learning-based approach capable of generating artifact-free MVCT images from acquired kVCT images. The outcome offers the benefits of artifact-free MVCT images with enhanced soft-tissue contrast, harnessing valuable information obtained through kVCT technology for precise therapy calibration. Our proposed method employs a UNet-inspired model and is compared with adversarial learning and transformer networks. This first and unique approach achieves remarkable success, with a PSNR of 30.02 dB across the entire patient volume and 27.47 dB in artifact-affected regions exclusively. Note that the PSNR calculation excludes the background, concentrating solely on the region of interest.
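The region-restricted PSNR mentioned above can be made concrete: compute the mean squared error only over a region-of-interest mask and convert to decibels. A small sketch, where the function name and the flat-list image representation are our own choices:

```python
import math

def masked_psnr(reference, prediction, mask, data_range=1.0):
    """PSNR restricted to a region of interest: only pixels where `mask`
    is True enter the MSE, mirroring the exclusion of background pixels.
    Inputs are equal-length flat lists of floats; `data_range` is the
    maximum possible pixel value span."""
    diffs = [(r - p) ** 2
             for r, p, m in zip(reference, prediction, mask) if m]
    if not diffs:
        raise ValueError("mask selects no pixels")
    mse = sum(diffs) / len(diffs)
    if mse == 0.0:
        return float("inf")  # identical images inside the mask
    return 10.0 * math.log10(data_range ** 2 / mse)
```

Masking matters because large uniform background regions have near-zero error and would otherwise inflate the score.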


Sensitivity Decouple Learning for Image Compression Artifacts Reduction

Ma, Li, Zhao, Yifan, Peng, Peixi, Tian, Yonghong

arXiv.org Artificial Intelligence

With the benefit of deep learning techniques, recent research has made significant progress in image compression artifact reduction. Despite their improved performance, prevailing methods only learn a mapping from the compressed image to the original one while ignoring the intrinsic attributes of the given compressed images, which greatly harms the performance of downstream parsing tasks. Different from these methods, we propose to decouple the intrinsic attributes into two complementary features for artifact reduction, i.e., compression-insensitive features to regularize the high-level semantic representations during training and compression-sensitive features to be aware of the compression degree. To achieve this, we first employ adversarial training to regularize the compressed and original encoded features for retaining high-level semantics, and we then develop a compression quality-aware feature encoder for the compression-sensitive features. Based on these dual complementary features, we propose a Dual Awareness Guidance Network (DAGN) that uses these awareness features as transformation guidance during the decoding phase. In our proposed DAGN, we develop a cross-feature fusion module to maintain the consistency of compression-insensitive features by fusing them into the artifact reduction baseline. Our method achieves an average 2.06 dB PSNR gain on BSD500, outperforming state-of-the-art methods, and requires only 29.7 ms to process one image on BSD500. Besides, experimental results on LIVE1 and LIU4K also demonstrate the efficiency, effectiveness, and superiority of the proposed method in terms of quantitative metrics, visual quality, and downstream machine vision tasks.
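The cross-feature fusion idea can be caricatured in a few lines: let the compression-sensitive features gate how strongly the compression-insensitive semantics are injected into the restoration baseline. This sigmoid-gated sum is our simplification for illustration, not DAGN's actual fusion module:

```python
import math

def cross_feature_fusion(baseline, insensitive, sensitive):
    """Toy dual-feature fusion: compression-sensitive features act as
    per-element gates in (0, 1) deciding how much of the compression-
    insensitive semantics is added to the restoration baseline.
    All inputs are equal-length flat lists of floats."""
    gates = [1.0 / (1.0 + math.exp(-s)) for s in sensitive]
    return [b + g * i for b, g, i in zip(baseline, gates, insensitive)]
```

Strongly compressed regions (large sensitive activations) thus receive more semantic guidance than lightly compressed ones.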


Capability enhancement of the X-ray micro-tomography system via ML-assisted approaches

Shah, Dhruvi, Mehta, Shruti, Agrawal, Ashish, Purohit, Shishir, Chaudhury, Bhaskar

arXiv.org Artificial Intelligence

Ring artifacts in X-ray micro-CT images are one of the primary obstacles to their accurate visual interpretation and quantitative analysis. The geometry of X-ray micro-CT scanners is similar to that of medical CT machines, except that the sample is rotated between a stationary source and detector. Ring artifacts are caused by defective or non-linear responses of detector pixels during micro-CT data acquisition. Artifacts in micro-CT images can often be so severe that the images are no longer useful for further analysis. Therefore, it is essential to understand the causes of artifacts and their potential solutions to maximize image quality. This article presents a convolutional neural network (CNN)-based deep learning (DL) model, inspired by UNet, with a series of encoder and decoder units with skip connections for the removal of ring artifacts. The proposed architecture has been evaluated using the Structural Similarity Index Measure (SSIM) and Mean Squared Error (MSE). Additionally, the results are compared with conventional filter-based non-ML techniques and are found to be better than the latter.
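As a point of reference for the filter-based non-ML baselines mentioned above: a defective detector pixel adds a near-constant offset to one sinogram column across all projection angles, which reconstructs into a ring. The sketch below is our simplification of classical stripe removal (estimate each column's offset against a smoothed mean curve and subtract it), not the article's method:

```python
def remove_stripes(sinogram):
    """Toy sinogram-domain stripe suppression. `sinogram` is a list of
    rows (projection angles) of equal-length columns (detector pixels).
    Each column's systematic offset is estimated as the deviation of its
    mean from a 3-tap smoothed mean curve, then subtracted."""
    n_rows, n_cols = len(sinogram), len(sinogram[0])
    col_means = [sum(row[j] for row in sinogram) / n_rows
                 for j in range(n_cols)]
    smoothed = []
    for j in range(n_cols):
        lo, hi = max(0, j - 1), min(n_cols, j + 2)
        smoothed.append(sum(col_means[lo:hi]) / (hi - lo))
    offsets = [m - s for m, s in zip(col_means, smoothed)]
    return [[row[j] - offsets[j] for j in range(n_cols)] for row in sinogram]
```

On a flat sinogram with one bright stripe, the correction pulls the outlier column back toward its neighbors, shrinking the stripe amplitude before reconstruction.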


Improving Automated Hemorrhage Detection in Sparse-view Computed Tomography via Deep Convolutional Neural Network based Artifact Reduction

Thalhammer, Johannes, Schultheiss, Manuel, Dorosti, Tina, Lasser, Tobias, Pfeiffer, Franz, Pfeiffer, Daniela, Schaff, Florian

arXiv.org Artificial Intelligence

Purpose: Sparse-view computed tomography (CT) is an effective way to reduce dose by lowering the total number of views acquired, albeit at the expense of image quality, which, in turn, can impact the ability to detect diseases. We explore deep learning-based artifact reduction in sparse-view cranial CT scans and its impact on automated hemorrhage detection. Methods: We trained a U-Net for artifact reduction on simulated sparse-view cranial CT scans from 3000 patients obtained from a public dataset and reconstructed with varying levels of sub-sampling. Additionally, we trained a convolutional neural network on fully sampled CT data from 17,545 patients for automated hemorrhage detection. We evaluated the classification performance using the area under the receiver operating characteristic curve (AUC-ROC) with corresponding 95% confidence intervals (CIs) and the DeLong test, along with confusion matrices. The performance of the U-Net was compared to an analytical approach based on total variation (TV). Results: The U-Net performed better than unprocessed and TV-processed images with respect to image quality and automated hemorrhage diagnosis. With U-Net post-processing, the number of views can be reduced from 4096 views (AUC-ROC: 0.974; 95% CI: 0.972-0.976) to 512 views (0.973; 0.971-0.975) with a minimal decrease in hemorrhage detection (P<.001), and to 256 views (0.967; 0.964-0.969) with a slight performance decrease (P<.001). Conclusion: The results suggest that U-Net-based artifact reduction substantially enhances automated hemorrhage detection in sparse-view cranial CTs. Our findings highlight that appropriate post-processing is crucial for optimal image quality and diagnostic accuracy while minimizing radiation dose.
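The view reductions quoted above (4096 to 512 or 256) correspond to keeping evenly spaced projection angles. A small sketch of that sub-sampling (the helper name is ours):

```python
def sparse_view_indices(n_total, n_keep):
    """Pick `n_keep` evenly spaced projection angles out of `n_total`,
    e.g. keeping 512 of 4096 views retains every 8th projection; the
    dropped views are what produce streak artifacts at reconstruction."""
    if not 0 < n_keep <= n_total:
        raise ValueError("need 0 < n_keep <= n_total")
    step = n_total / n_keep
    return [int(i * step) for i in range(n_keep)]
```

Dose scales roughly with the number of views acquired, which is why the paper trades views against post-processing quality.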


Annealed Score-Based Diffusion Model for MR Motion Artifact Reduction

Oh, Gyutaek, Lee, Jeong Eun, Ye, Jong Chul

arXiv.org Artificial Intelligence

Motion artifact reduction is an important research topic in MR imaging, as motion artifacts degrade image quality and make diagnosis difficult. Recently, many deep learning approaches have been studied for motion artifact reduction. Unfortunately, most existing models are trained in a supervised manner, requiring paired motion-corrupted and motion-free images, or are based on a strict motion-corruption model, which limits their use in real-world situations. To address this issue, here we present an annealed score-based diffusion model for MRI motion artifact reduction. Specifically, we train a score-based model using only motion-free images, and motion artifacts are then removed by repeatedly applying forward and reverse diffusion processes to gradually impose low-frequency data consistency. Experimental results verify that the proposed method successfully reduces both simulated and in vivo motion artifacts, outperforming state-of-the-art deep learning methods.
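The repeated low-frequency data-consistency step can be illustrated on a 1-D signal: keep the high-frequency content of the current diffusion estimate, but impose the low-frequency content of the measurement. The sketch below uses a moving average as the low-pass filter purely for illustration; the actual method operates on MR data with a trained score model, not this toy filter:

```python
def low_frequency_consistency(estimate, measured, window=3):
    """Toy data-consistency step: split `estimate` into low- and
    high-frequency parts via a centered moving average, then swap in
    the low-frequency part of `measured`. Both inputs are equal-length
    lists of floats."""
    def low_pass(signal):
        half = window // 2
        out = []
        for i in range(len(signal)):
            lo, hi = max(0, i - half), min(len(signal), i + half + 1)
            out.append(sum(signal[lo:hi]) / (hi - lo))
        return out
    est_low, meas_low = low_pass(estimate), low_pass(measured)
    # estimate = high + low; keep high, replace low with the measurement's.
    return [e - el + ml for e, el, ml in zip(estimate, est_low, meas_low)]
```

Iterating such a step inside the reverse diffusion keeps the sample anchored to the measured anatomy while the score model is free to repair the artifact-bearing high frequencies.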


TriDoNet: A Triple Domain Model-driven Network for CT Metal Artifact Reduction

Shi, Baoshun, Jiang, Ke, Zhang, Shaolei, Lian, Qiusheng, Qin, Yanwei

arXiv.org Artificial Intelligence

Recent deep learning-based methods have achieved promising performance for computed tomography metal artifact reduction (CTMAR). However, most of them suffer from two limitations: (i) domain knowledge is not fully embedded into the network training; (ii) metal artifacts lack effective representation models. These limitations leave room for further performance improvement. To address these issues, we propose a novel triple-domain model-driven CTMAR network, termed TriDoNet, whose training exploits knowledge of three domains: the sinogram, CT image, and metal artifact domains. Specifically, to exploit the non-local repetitive streaking patterns of metal artifacts, we encode them with an explicit tight-frame sparse representation model with adaptive thresholds. Furthermore, we design a contrastive regularization (CR) built upon contrastive learning that uses clean CT images and metal-affected images as positive and negative samples, respectively. Experimental results show that our TriDoNet generates superior artifact-reduced CT images.
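Sparse representation with adaptive thresholds rests on the soft-thresholding operator, which shrinks coefficients toward zero and zeroes out the small ones. A minimal list-based sketch, illustrative only and not the paper's learned thresholding:

```python
def soft_threshold(coeffs, threshold):
    """Soft-thresholding operator from sparse coding: each coefficient
    is shrunk toward zero by its threshold and clipped at zero.
    `threshold` may be a single number or a per-coefficient list
    (the 'adaptive' case)."""
    if isinstance(threshold, (int, float)):
        threshold = [threshold] * len(coeffs)
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0)
            for c, t in zip(coeffs, threshold)]
```

In an artifact model, coefficients encoding the repetitive streak pattern survive thresholding while incoherent residuals are suppressed.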


AI-CHD

Communications of the ACM

Congenital heart disease (CHD), the most common congenital birth defect, has long been known as one of the main causes of infant death during the first year of life [1]. More than one million of the world's approximately 135 million newborns are born each year with CHD [21]. Over the last century, cardiac surgery has been an effective approach to tackling CHD; its remarkable advance has decreased the mortality rate of newborns with CHD [10]. However, that lower mortality rate is mostly observed in developed countries rather than developing ones. Surgical treatment of CHD requires highly skilled surgeons along with complex infrastructure and equipment. While developed countries have perfected their treatment of CHD for more than 50 years, developing countries are still in the early stages. It is estimated that the number of congenital cardiac surgeons needs to increase by 1,250 times to satisfy only the basic needs of CHD treatment worldwide [16], and most of those surgeons reside in developed countries. As a result, the mortality rate in developing countries is currently at 20%, strikingly higher than the 3% to 7% in developed countries [16], not to mention the fact that mortality rates in developing countries are likely underreported due to the lack of proper diagnosis. Remote surgery has been an active field for decades, enabling experienced surgeons to remotely instruct robots (telerobotics) or guide less-experienced surgeons (surgical telementoring) [8].


Recursive Fusion and Deformable Spatiotemporal Attention for Video Compression Artifact Reduction

Zhao, Minyi, Xu, Yi, Zhou, Shuigeng

arXiv.org Artificial Intelligence

A number of deep learning-based algorithms have been proposed to recover high-quality videos from low-quality compressed ones. Among them, some restore the missing details of each frame by exploiting the spatiotemporal information of neighboring frames. However, these methods usually suffer from a narrow temporal scope and may therefore miss useful details from frames outside the immediate neighborhood. In this paper, to boost artifact removal, on the one hand, we propose a Recursive Fusion (RF) module to model temporal dependency within a long temporal range. Specifically, RF utilizes both the current reference frames and the preceding hidden state to conduct better spatiotemporal compensation. On the other hand, we design an efficient and effective Deformable Spatiotemporal Attention (DSTA) module so that the model can focus more effort on restoring artifact-rich areas, such as the boundary of a moving object. Extensive experiments show that our method outperforms existing ones on the MFQE 2.0 dataset in terms of both fidelity and perceptual effect. Code is available at https://github.com/zhaominyiz/RFDA-PyTorch.
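The recursive-fusion idea, maintaining a hidden state so each output draws on frames far beyond its immediate neighbors, can be caricatured as an exponentially decayed running blend. This is a toy stand-in for the learned RF module, with our own function name and decay parameter:

```python
def recursive_fusion(frames, decay=0.8):
    """Toy recursive temporal fusion: the hidden state is an
    exponentially decayed blend of all frames seen so far, so frame t
    still carries a (decayed) contribution from frame 0. `frames` is a
    list of equal-length pixel lists; returns one fused list per frame."""
    hidden = list(frames[0])
    fused = [list(hidden)]
    for frame in frames[1:]:
        # Blend the new frame into the persistent hidden state.
        hidden = [decay * h + (1.0 - decay) * f
                  for h, f in zip(hidden, frame)]
        fused.append(list(hidden))
    return fused
```

Unlike a sliding window over neighboring frames, the hidden state gives an unbounded (though decaying) temporal receptive field at constant memory cost.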


Three-dimensional Generative Adversarial Nets for Unsupervised Metal Artifact Reduction

Nakao, Megumi, Imanishi, Keiho, Ueda, Nobuhiro, Imai, Yuichiro, Kirita, Tadaaki, Matsuda, Tetsuya

arXiv.org Artificial Intelligence

The reduction of metal artifacts in computed tomography (CT) images, specifically for strong artifacts generated by multiple metal objects, is a challenging issue in medical imaging research. Although there have been some studies on supervised metal artifact reduction through the learning of synthesized artifacts, it is difficult for simulated artifacts to cover the complexity of the real physical phenomena that may be observed in X-ray propagation. In this paper, we introduce metal artifact reduction methods based on an unsupervised volume-to-volume translation learned from clinical CT images. We construct three-dimensional adversarial nets with a regularized loss function designed for metal artifacts from multiple dental fillings. The results of experiments using 915 CT volumes from real patients demonstrate that the proposed framework has an outstanding capacity to reduce strong artifacts and to recover underlying missing voxels, while preserving the anatomical features of soft tissues and tooth structures from the original images. Medical procedures such as diagnosis, surgical planning, and radiotherapy can be seriously degraded by the presence of metal artifacts in computed tomography (CT) imaging. Metal objects such as dental fillings, fixation devices, and other electric instruments implanted in patients' bodies inhibit X-ray propagation [1], preventing accurate calculation of the CT values during image reconstruction and yielding dark bands or streak artifacts in the CT images [2][3]. To correct the images, missing CT values for the underlying anatomical features must be compensated at the same time as the artifacts are removed. Although doctors make clinical efforts to manually correct such artifacts, this is a labor-intensive and time-consuming task. M. Nakao and T. Matsuda are with the Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo, Kyoto 606-8501, Japan; e-mail: megumi@i.kyoto-u.ac.jp.